garden/KGBicheno/Artificial Intelligence/Feminist Chatbot/The Feminist Design Tool/7 Data.md by @KGBicheno
7 Data
There are two main areas where data might be problematic.
- Training data for machine learning can carry its own inherent bias, introduced by how the data is chosen
- Data collection can omit questions or responses relevant to demographics outside the mainstream
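One hedged way to start probing the first problem is to tally how often each demographic group appears in a training set before using it; a heavily skewed tally is an early hint that the collection process over- or under-sampled some groups. This is only an illustrative sketch — the record shape and the `gender` field are assumptions for the example, not part of the Feminist Design Tool itself.

```python
from collections import Counter

def representation_report(samples, group_key):
    """Return each group's share of a dataset.

    A skewed distribution is a first signal that the way the
    data was chosen may embed bias before any model is trained.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical records; real data would have many more fields and groups.
data = [
    {"text": "...", "gender": "woman"},
    {"text": "...", "gender": "man"},
    {"text": "...", "gender": "man"},
    {"text": "...", "gender": "man"},
]
print(representation_report(data, "gender"))
# {'woman': 0.25, 'man': 0.75}
```

A report like this does not prove fairness — it only surfaces who is missing from the data so the question can be asked deliberately.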
Questions to ask yourself
- How will you collect and treat data through the development of your design?
- Are you aware of how bias might manifest itself in your training data?
- Are you aware of how bias might manifest itself in the AI techniques that power your design (like machine learning)?
- How could stakeholder-generated data and feedback be used to improve the design?
- Will the design learn from the stakeholder’s behaviour, and if so, are you assuming that the design will get it right?
- What mechanisms or features could make these assumptions visible to the stakeholder and empower them to change the assumptions if they want to?
- How will you protect stakeholder data?
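The question "are you assuming that the design will get it right?" can be made concrete by checking whether the design performs equally well for different stakeholder groups. Below is a minimal sketch of such a per-group check; the function name and the example groups are hypothetical, not from the source.

```python
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Compute accuracy separately for each demographic group.

    A large gap between groups suggests the design only
    "gets it right" for some stakeholders, not all of them.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {group: correct[group] / total[group] for group in total}
```

For example, `per_group_accuracy([1, 0, 1], [1, 1, 1], ["a", "a", "b"])` reports 50% accuracy for group `a` and 100% for `b` — a gap that should prompt the questions above rather than be hidden in an overall average.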
Go to [[8 Architecture]]
For the full list see [[The Feminist Design Tool]]
This is part of the [[Feminist Chatbot Main Page]]